Any Large Language Model Can Be a Reliable Judge: Debiasing with a Reasoning-based Bias Detector

Yang, Haoyan, Bao, Runxue, Xiao, Cao, Ma, Jun, Bhatia, Parminder, Gao, Shangqian, Kass-Hout, Taha

arXiv.org Artificial Intelligence

LLM-as-a-Judge has emerged as a promising tool for automatically evaluating generated outputs, but its reliability is often undermined by potential biases in judgment. Existing efforts to mitigate these biases face key limitations: in-context learning-based methods fail to address rooted biases due to the evaluator's limited capacity for self-reflection, whereas fine-tuning is not applicable to all evaluator types, especially closed-source models. To address this challenge, we introduce the Reasoning-based Bias Detector (RBD), a plug-in module that identifies biased evaluations and generates structured reasoning to guide evaluator self-correction. Rather than modifying the evaluator itself, RBD operates externally and engages in an iterative process of bias detection and feedback-driven revision. To support its development, we design a complete pipeline consisting of biased dataset construction, supervision collection, distilled reasoning-based fine-tuning of RBD, and integration with LLM evaluators. We fine-tune four sizes of RBD models, ranging from 1.5B to 14B, and observe consistent performance improvements across all scales. Experimental results on 4 bias types--verbosity, position, bandwagon, and sentiment--evaluated using 8 LLM evaluators demonstrate RBD's strong effectiveness. For example, the RBD-8B model improves evaluation accuracy by an average of 18.5% and consistency by 10.9%, and surpasses prompting-based baselines and fine-tuned judges by 12.8% and 17.2%, respectively. These results highlight RBD's effectiveness and scalability. Additional experiments further demonstrate its strong generalization across biases and domains, as well as its efficiency.
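
As a rough illustration of the detect-and-revise loop the abstract describes, the sketch below shows how an external RBD module could wrap an LLM evaluator: the evaluator produces a verdict, RBD checks it for bias, and any structured reasoning is fed back for a revised judgment. The function names, interfaces, and stopping rule are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of an external detect-and-revise loop (assumed interfaces).

def call_evaluator(task, candidates, feedback=None):
    """Placeholder: ask the LLM evaluator to judge the candidate outputs,
    optionally conditioning on RBD feedback from the previous round."""
    raise NotImplementedError

def call_rbd(task, candidates, verdict):
    """Placeholder: ask the fine-tuned RBD model whether the verdict looks biased
    (e.g., verbosity, position, bandwagon, sentiment) and to return structured
    reasoning explaining the suspected bias."""
    raise NotImplementedError

def judge_with_rbd(task, candidates, max_rounds=3):
    feedback = None
    verdict = None
    for _ in range(max_rounds):
        verdict = call_evaluator(task, candidates, feedback)
        detection = call_rbd(task, candidates, verdict)
        if not detection["biased"]:        # RBD accepts the judgment
            return verdict
        feedback = detection["reasoning"]  # feed reasoning back for self-correction
    return verdict                         # return the last verdict if still flagged
```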


Does Local News Stay Local?: Online Content Shifts in Sinclair-Acquired Stations

Wanner, Miriam, Hager, Sophia, Field, Anjalie

arXiv.org Artificial Intelligence

Local news stations are often considered to be reliable sources of non-politicized information, particularly local concerns that residents care about. Because these stations are trusted news sources, viewers are particularly susceptible to the information they report. The Sinclair Broadcast Group is a broadcasting company that has acquired many local news stations in the last decade. We investigate the effects of local news stations being acquired by Sinclair: how does coverage change? We use computational methods to investigate changes in internet content put out by local news stations before and after being acquired by Sinclair and in comparison to national news outlets. We find clear evidence that local news stations report more frequently on national news at the expense of local topics, and that their coverage of polarizing national topics increases.


Captured by Captions: On Memorization and its Mitigation in CLIP Models

Wang, Wenhao, Dziedzic, Adam, Kim, Grace C., Backes, Michael, Boenisch, Franziska

arXiv.org Artificial Intelligence

Multi-modal models, such as CLIP, have demonstrated strong performance in aligning visual and textual representations, excelling in tasks like image retrieval and zero-shot classification. Despite this success, the mechanisms by which these models utilize training data, particularly the role of memorization, remain unclear. In uni-modal models, both supervised and self-supervised, memorization has been shown to be essential for generalization. However, it is not well understood how these findings would apply to CLIP, which incorporates elements from both supervised learning, via captions that provide a supervisory signal similar to labels, and from self-supervised learning, via the contrastive objective. To bridge this gap in understanding, we propose a formal definition of memorization in CLIP (CLIPMem) and use it to quantify memorization in CLIP models. Our results indicate that CLIP's memorization behavior falls between the supervised and self-supervised paradigms, with "mis-captioned" samples exhibiting the highest levels of memorization. Additionally, we find that the text encoder contributes more to memorization than the image encoder, suggesting that mitigation strategies should focus on the text domain. Building on these insights, we propose multiple strategies to reduce memorization while at the same time improving utility--something that had not been shown before for traditional learning paradigms, where reducing memorization typically results in a utility decrease.
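
The abstract does not spell out the CLIPMem definition, so the sketch below only illustrates a generic leave-one-out-style memorization proxy for a single image-caption pair: compare the pair's alignment score under a model trained with the sample against one trained without it. The encode_image/encode_text calls follow the common CLIP interface; everything else here is an assumption, not the paper's metric.

```python
# Illustrative per-sample memorization proxy in the leave-one-out style used for
# uni-modal models. This is NOT the paper's CLIPMem definition; the two pre-trained
# models (with/without the sample) are assumed to be available.
import torch
import torch.nn.functional as F

def alignment_score(model, image, text_tokens):
    """Cosine similarity between CLIP image and text embeddings for one pair."""
    with torch.no_grad():
        img_emb = F.normalize(model.encode_image(image), dim=-1)
        txt_emb = F.normalize(model.encode_text(text_tokens), dim=-1)
    return (img_emb * txt_emb).sum(dim=-1).item()

def memorization_proxy(model_with, model_without, image, text_tokens):
    """Higher values mean the pair scores much better only when it was trained on,
    i.e., the model relies on having seen that exact (image, caption) pair."""
    return (alignment_score(model_with, image, text_tokens)
            - alignment_score(model_without, image, text_tokens))
```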


Some Tesla owners are losing trust in Elon Musk's promises of 'full self-driving'

#artificialintelligence

Washington, DC (CNN) Frustrated Tesla owners continue to wait for "full self-driving," an expensive and long-delayed software feature that isn't even guaranteed to help their cars' resale values. Some of the company's earliest backers of the "full self-driving" option are even beginning to lose faith in the promise of ever enjoying a truly autonomous Tesla. Years-long delays, buggy beta software, and the risk of no return on their investment in the option package have left some Tesla owners disappointed. Tesla CEO Elon Musk's prognostications and Tesla's actual reality have diverged so much that some owners tell CNN Business they've lost confidence in his predictions. Some otherwise satisfied Tesla owners describe feeling duped into buying "full self-driving" ahead of its polished release, because Musk warned that the price would increase.


Methodology for Classifying and Indexing Case-Based Reasoning Systems in the Health Sciences

Bichindaritz, Isabelle (University of Washington Tacoma) | Reed, John C., Jr. (University of Washington Tacoma)

AAAI Conferences

As the amount of information available to researchers grows at an increasing rate, it becomes much more difficult to find relevant resources. An approach taken by several authoritative bodies, such as the Association for Computing Machinery and the U.S. National Library of Medicine, is the introduction of a classification scheme. However, even the most modern schemes are not capable of adequately distinguishing one research paper from another, due mainly to their broad generality. This paper describes a methodology for building a much narrower, specialized classification scheme focused on the area of Case-Based Reasoning in the Health Sciences. It is derived from a thorough analysis of the field, but with a framework that can be adapted to other areas. Using a tiered approach to subdivide systems into narrower classes according to criteria specific to the field, this classification scheme affords interdisciplinary search, which generic indexing systems generally leave out. This paper presents the resulting classification scheme and showcases its usefulness for classifying and tracking the evolution of research.
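
As a minimal sketch of how a tiered, multi-facet scheme like the one described above could support interdisciplinary search, the snippet below indexes papers under several facets and intersects them at query time. The facet and class names are illustrative placeholders, not the scheme's actual categories.

```python
# Minimal sketch of a tiered, multi-facet index; facet/class names are placeholders.
from collections import defaultdict

class TieredIndex:
    def __init__(self):
        # facet -> class label -> set of paper ids
        self.index = defaultdict(lambda: defaultdict(set))

    def add(self, paper_id, labels):
        """labels: dict mapping a facet (e.g., 'task', 'medical_domain') to a class."""
        for facet, label in labels.items():
            self.index[facet][label].add(paper_id)

    def search(self, **criteria):
        """Interdisciplinary search: intersect papers matching every given facet."""
        sets = [self.index[facet][value] for facet, value in criteria.items()]
        return set.intersection(*sets) if sets else set()

idx = TieredIndex()
idx.add("paper-1", {"task": "diagnosis", "medical_domain": "oncology"})
idx.add("paper-2", {"task": "diagnosis", "medical_domain": "cardiology"})
print(idx.search(task="diagnosis", medical_domain="oncology"))  # {'paper-1'}
```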